Additive importance measures


Understanding Global Feature Contributions With Additive Importance Measures

Neural Information Processing Systems

Understanding the inner workings of complex machine learning models is a long-standing problem, and most recent research has focused on local interpretability. To assess the role of individual input features in a global sense, we explore the perspective of defining feature importance through the predictive power associated with each feature. We introduce two notions of predictive power (model-based and universal) and formalize this approach with a framework of additive importance measures, which unifies numerous methods in the literature. We then propose SAGE, a model-agnostic method that quantifies predictive power while accounting for feature interactions. Our experiments show that SAGE can be calculated efficiently and that it assigns more accurate importance values than other methods.
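
As a rough illustration of the model-based notion of predictive power, the sketch below scores a trained classifier when only a subset of features is known. It is a minimal sketch, not the paper's exact estimator: the helper name predictive_power is ours, and mean imputation stands in for properly marginalizing the unknown features over their distribution.

    import numpy as np
    from sklearn.metrics import log_loss

    def predictive_power(model, X, y, subset):
        """Crude estimate of v(S): performance of `model` when only the
        features indexed by `subset` are known. Unknown features are
        replaced by their column means (a simplification of marginalizing
        over the feature distribution)."""
        X_masked = np.tile(X.mean(axis=0), (len(X), 1))
        X_masked[:, list(subset)] = X[:, list(subset)]
        # Negate the loss so that more informative subsets score higher.
        return -log_loss(y, model.predict_proba(X_masked))

Under this convention, v of the empty set reflects the model's performance with no information about the input and v of the full feature set reflects its complete performance; an additive importance measure then distributes the gap between the two across individual features.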



Review for NeurIPS paper: Understanding Global Feature Contributions With Additive Importance Measures

Neural Information Processing Systems

Weaknesses: The main limitation of the proposed method is a lack of novelty. The use of Shapley values has been tried before, such as in SHAP, so the main contribution here seems to be the sampling-based approximation of SAGE. The paper should therefore have focused more on this particular contribution and on how it could improve over prior related methods. On the same note, the paper devotes too much space to conceptual review and pushes most of the results and discussion to the supplement. It is not clear what the real advantage of SAGE is. Computing Shapley values directly is computationally prohibitive even for a modest number of features, so the authors resort to sampling.
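
For intuition about the sampling the review refers to, here is a minimal permutation-sampling sketch of Shapley value estimation (illustrative only, not the authors' exact algorithm). The argument value_fn is any set function returning the predictive power of a feature subset, e.g. the predictive_power helper sketched earlier.

    import numpy as np

    def shapley_sampling(value_fn, num_features, num_perms=256, seed=0):
        """Monte Carlo Shapley estimate: sample feature orderings and
        average each feature's marginal gain in value_fn."""
        rng = np.random.default_rng(seed)
        phi = np.zeros(num_features)
        for _ in range(num_perms):
            perm = rng.permutation(num_features)
            prev = value_fn([])            # value of the empty subset
            included = []
            for i in perm:
                included.append(int(i))
                curr = value_fn(included)
                phi[i] += curr - prev      # marginal contribution of feature i
                prev = curr
        return phi / num_perms

Each sampled permutation requires num_features + 1 evaluations of value_fn, so the cost per sample grows linearly in the number of features rather than exponentially as in the exact Shapley computation.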


Review for NeurIPS paper: Understanding Global Feature Contributions With Additive Importance Measures

Neural Information Processing Systems

The reviews were split, with different reviewers focusing on different aspects of the paper. There were some concerns about the novelty and the empirical demonstration, but at the same time there was an appreciation for the conceptual contributions and the depth (rather than breadth) of the experiments. Overall, a significant contribution appears to have been made. The authors should reconsider the decision to put most of the experiments (all but one) in the supplementary material, which made judging the empirical contribution of the paper difficult. That said, the experimental analysis is well done and provides convincing evidence.



Understanding Global Feature Contributions Through Additive Importance Measures

Covert, Ian; Lundberg, Scott; Lee, Su-In

arXiv.org Machine Learning

Understanding the inner workings of complex machine learning models is a long-standing problem, with recent research focusing primarily on local interpretability. To assess the role of individual input features in a global sense, we propose a new feature importance method, Shapley Additive Global importancE (SAGE), a model-agnostic measure of feature importance based on the predictive power associated with each feature. SAGE relates to prior work through the novel framework of additive importance measures, a perspective that unifies numerous other feature importance methods and shows that only SAGE properly accounts for complex feature interactions. We define SAGE using the Shapley value from cooperative game theory, which leads to numerous intuitive and desirable properties. Our experiments apply SAGE to eight datasets, including MNIST and breast cancer subtype classification, and demonstrate its advantages through quantitative and qualitative evaluations.
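
For reference, the standard Shapley value from cooperative game theory that the abstract invokes assigns feature i a weighted average of its marginal contributions over all feature subsets. Written for a predictive-power set function v over the feature set D = {1, ..., d}:

    \phi_i(v) = \sum_{S \subseteq D \setminus \{i\}} \frac{|S|!\,(d - |S| - 1)!}{d!} \bigl( v(S \cup \{i\}) - v(S) \bigr)

This weighting is what yields standard Shapley properties such as efficiency (the values sum to the total predictive power gained from the full feature set) and symmetry, in line with the intuitive and desirable properties the abstract mentions.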